Multiplicative algorithms for maximum penalized likelihood inversion with nonnegative constraints and generalized error distributions
Abstract
In many linear inverse problems the unknown function f (or its discrete approximation θ, a p×1 vector), which needs to be reconstructed, is subject to nonnegativity constraints; we call these problems nonnegative linear inverse problems (NNLIPs). This paper considers NNLIPs, but the error distribution is not confined to the traditional Gaussian or Poisson distributions: we adopt the exponential family of distributions, of which the Gaussian and Poisson are special cases. We search for the nonnegative maximum penalized likelihood (NNMPL) estimate of θ. The size of θ often prohibits direct implementation of the traditional methods for constrained optimization, so how to develop easy-to-implement algorithms for the NNMPL estimate is an interesting and challenging question. Given that the measurements and point-spread-function (PSF) values are all nonnegative, we propose a simple multiplicative iterative algorithm. We show that if there is no penalty, then this algorithm is almost sure to converge; otherwise a relaxation or line search is needed to assure convergence.
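The abstract does not spell out the update formula, but a familiar special case conveys the flavor of such multiplicative algorithms: for Poisson errors with no penalty, the classical EM (Richardson-Lucy type) update rescales each component of θ by a nonnegative factor, so nonnegativity is preserved automatically. The sketch below is illustrative only; the matrix A, the data y, and all function names are placeholders rather than the paper's notation.

```python
import numpy as np

def multiplicative_poisson_update(A, y, theta, n_iter=100, eps=1e-12):
    """Illustrative multiplicative updates for nonnegative Poisson inversion.

    A     : (n, p) nonnegative point-spread-function matrix
    y     : (n,)   nonnegative measurements
    theta : (p,)   strictly positive starting vector

    Each sweep multiplies theta elementwise by a nonnegative factor, so the
    iterates stay nonnegative without any projection step. This is the
    EM / Richardson-Lucy type update for the unpenalized Poisson likelihood,
    a special case only, not the paper's general penalized algorithm.
    """
    col_sums = A.sum(axis=0)            # normalizing constants sum_i A_ij
    for _ in range(n_iter):
        fitted = A @ theta              # current model mean (A theta)_i
        ratio = y / np.maximum(fitted, eps)
        theta = theta * (A.T @ ratio) / np.maximum(col_sums, eps)
    return theta
```

With a penalty the multiplier changes and, as the abstract notes, a relaxation or line search is then needed to guarantee convergence.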
Similar resources
Estimation in Simple Step-Stress Model for the Marshall-Olkin Generalized Exponential Distribution under Type-I Censoring
This paper considers the simple step-stress model based on the Marshall-Olkin generalized exponential distribution when there is a time constraint on the duration of the experiment. The maximum likelihood equations for estimating the parameters, assuming a cumulative exposure model with Marshall-Olkin generalized exponential lifetimes, are derived. The likelihood equations do not lea...
Penalized Bregman Divergence Estimation via Coordinate Descent
Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to have a computational advantage. This paper explores...
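As a point of reference for the coordinate descent idea mentioned above, the sketch below shows the standard cyclic CD update with soft-thresholding for the lasso (penalized linear regression). It is a generic illustration, not the Bregman-divergence estimator of the cited paper, and the function names are invented for this sketch.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=100):
    """Minimal cyclic coordinate descent for the lasso objective
    (1/2n)||y - X b||^2 + lam * ||b||_1, assuming the columns of X are
    standardized. An illustrative sketch of the CD idea only."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                      # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r / n      # univariate least-squares solution
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
            r -= X[:, j] * b[j]        # restore the residual with the new value
    return b
```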
A comparison of algorithms for maximum likelihood estimation of Spatial GLM models
In spatial generalized linear mixed models, spatial correlation is assumed by adding normal latent variables to the model. In these models, because of the non-Gaussian spatial response and the presence of latent variables, the likelihood function cannot usually be given in closed form, so the maximum likelihood approach is very challenging. The main purpose of this paper is to introduce two n...
On Bivariate Generalized Exponential-Power Series Class of Distributions
In this paper, we introduce a new class of bivariate distributions by compounding the bivariate generalized exponential and power-series distributions. This new class contains the bivariate generalized exponential-Poisson, bivariate generalized exponential-logarithmic, bivariate generalized exponential-binomial and bivariate generalized exponential-negative binomial distributions as specia...
Generalized Alpha-Beta Divergences and Their Application to Robust Nonnegative Matrix Factorization
We propose a class of multiplicative algorithms for Nonnegative Matrix Factorization (NMF) which are robust with respect to noise and outliers. To achieve this, we formulate a new family of generalized divergences, referred to as the Alpha-Beta-divergences (AB-divergences), which are parameterized by two tuning parameters, alpha and beta, and smoothly connect the fundamental Alpha-, Beta- and Gam...
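For context, the classic Lee-Seung multiplicative updates for Euclidean NMF below illustrate the multiplicative-update scheme that the AB-divergence algorithms generalize; this is a plain special case, not the robust AB-divergence updates of the cited paper, and the names used are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-12, seed=0):
    """Classic Lee-Seung multiplicative NMF updates for ||V - W H||_F^2.

    Shown only to illustrate the multiplicative-update idea: each factor is
    rescaled elementwise by a nonnegative ratio, so W and H stay nonnegative.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # elementwise multiplicative factors keep W and H nonnegative
        H *= (W.T @ V) / np.maximum(W.T @ W @ H, eps)
        W *= (V @ H.T) / np.maximum(W @ H @ H.T, eps)
    return W, H
```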